
    Design-time Models for Resiliency

    Resiliency in process-aware information systems is based on the availability of recovery flows and alternative data for coping with missing data. In this paper, we discuss an approach to process and information modeling that supports the specification of recovery flows and alternative data. In particular, we focus on processes using sensor data from different sources. The proposed model can be adopted to specify resiliency levels of information systems based on event-based and temporal constraints.

    On Handling Business Process Anomalies through Artifact-based Modeling

    Control flow-based process modeling notations, like BPMN, are good at defining the normal execution flow and the management of foreseen exceptions. When unforeseen situations occur, one cannot detect whether the execution is still acceptable with respect to the process definition. In contrast, artifact-centric process modeling notations, like the Guard-Stage-Milestone (GSM) notation, are better suited for this kind of scenario: they define a process in terms of acceptable states and do not enforce any specific execution flow. This improves flexibility, but hampers the clarity of the defined models. The goal of this paper is to show how an extension of GSM, i.e., E-GSM, can be used to detect deviations from the execution path as modeled in BPMN, while keeping the process execution alive.
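The state-based execution model the abstract contrasts with BPMN can be sketched in a few lines: a stage opens when its guard holds over the artifact and closes when its milestone holds, with no fixed ordering imposed. This is a minimal illustration of the GSM idea only; all class names, stage names, and data here are hypothetical and are not taken from the E-GSM paper.

```python
# Minimal sketch of the Guard-Stage-Milestone (GSM) idea: a stage opens when
# its guard predicate holds and closes when its milestone holds, reacting to
# the artifact's state rather than following a predefined control flow.
# All identifiers and example data are illustrative, not from E-GSM.

class Stage:
    def __init__(self, name, guard, milestone):
        self.name = name
        self.guard = guard          # predicate over the artifact state
        self.milestone = milestone  # predicate marking completion
        self.open = False
        self.achieved = False

    def react(self, artifact):
        """Open or close the stage based on the artifact's current state."""
        if not self.open and not self.achieved and self.guard(artifact):
            self.open = True
        if self.open and self.milestone(artifact):
            self.open = False
            self.achieved = True

# A shipping artifact whose stages react to state changes, not to a flow.
artifact = {"loaded": False, "delivered": False}
load = Stage("Load", lambda a: not a["loaded"], lambda a: a["loaded"])
ship = Stage("Ship", lambda a: a["loaded"], lambda a: a["delivered"])

for stage in (load, ship):
    stage.react(artifact)
artifact["loaded"] = True      # state change observed on the artifact
for stage in (load, ship):
    stage.react(artifact)

print(load.achieved, ship.open)  # → True True
```

Because stages predicate on state rather than on a fixed sequence, an out-of-order but state-consistent execution is still acceptable, which is the flexibility the abstract attributes to artifact-centric notations.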

    mArtifact: an Artifact-driven Process Monitoring Platform

    Traditionally, human intervention is required to monitor a business process. Operators must notify the system when manual activities are executed, and must manually restart the monitoring whenever the process is not executed as expected. This paper presents mArtifact, an artifact-driven process monitoring platform. mArtifact uses the E-GSM artifact-centric language to represent the process. This way, when a violation occurs, it can flag the affected activities without halting the monitoring. By predicating on the conditions of the physical artifacts participating in a process, mArtifact autonomously detects when activities are executed and when constraints are violated. The audience is expected to be familiar with business process monitoring and artifact-centric modeling languages.
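The core behavior the abstract describes, inferring activity execution from the state of a physical artifact and flagging violations without stopping, can be sketched as a small stream processor. This is a hedged illustration in the spirit of the described platform; the sensor fields, activity names, and threshold are invented for the example and are not mArtifact's actual API.

```python
# Illustrative sketch of artifact-driven monitoring: sensor readings on a
# physical artifact are mapped to activity detections, and constraint
# violations are flagged while monitoring continues uninterrupted.
# Field names and the temperature threshold are hypothetical.

def monitor(readings, max_temp=30):
    """Yield (activity, violated) pairs from a stream of artifact readings."""
    for r in readings:
        activity = "transport" if r["moving"] else "idle"  # inferred activity
        violated = r["temp"] > max_temp                    # constraint check
        yield (activity, violated)

events = list(monitor([
    {"moving": True, "temp": 25},
    {"moving": True, "temp": 35},   # violation flagged, monitoring goes on
    {"moving": False, "temp": 22},
]))
print(events)  # → [('transport', False), ('transport', True), ('idle', False)]
```

The key design point mirrored here is that a violation produces a flag on the affected event rather than an exception that halts the monitor.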

    Application Driven IT Service Management for Energy Efficiency

    Considering the ever-increasing usage of information technology in our everyday life and the huge concentration of computational resources at remote service centers, energy costs have become one of the biggest challenges for IT managers. Mechanisms to improve energy efficiency in service centers operate at different levels, ranging from single components to the whole facility, and consider both equipment and application issues. In this paper, we focus on analyzing energy-efficiency issues at the application level, concentrating on e-business processes. Our approach proposes a new method to evaluate and apply green adaptation strategies based on the characteristics of the service application with respect to the business process, taking non-functional requirements into account.

    A Survey on Service Quality Description

    Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service which is available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description in the literature. We consider a large spectrum of models and metamodels to describe service quality, ranging from ontological approaches to define quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey is performed by inspecting the characteristics of the available approaches to reveal which are the consolidated ones and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here have been selected based on a systematic review of conference proceedings and journals spanning various research areas in computer science.

    Towards a Center for Modeling and Simulation: The Case for Jordan

    Modeling and Simulation (M&S) has recently become an important area pursued by many researchers and practitioners due to the role it plays in understanding complex systems and problems. We have therefore witnessed the establishment of many M&S organizations in the last two decades, especially in the more developed world. Less developed countries are starting to recognize the need for such capability, especially since the problems they face are no less complex. In this paper, we present a preliminary study towards a business plan for establishing a scientific center for Modeling, Analysis, Simulation and Animation in Jordan (JoSAMA) and the value it can bring to the academic, industrial, and governmental communities in Jordan and potentially in the Middle East. This effort was funded by the Fulbright Specialist Program and hosted by the German-Jordanian University, Amman, Jordan.

    Information logistics and fog computing: The DITAS approach

    Data-intensive applications are usually developed on top of Cloud resources, whose service delivery model helps in building reliable and scalable solutions. However, especially in the context of Internet of Things-based applications, Cloud Computing comes with some limitations: data generated at the edge of the network are processed at its core, raising security, privacy, and latency issues. On the other hand, Fog Computing is emerging as an extension of Cloud Computing, where resources located at the edge of the network are used in combination with cloud services. The goal of this paper is to present the approach adopted in the recently started DITAS project: the design of a Cloud platform is proposed to optimize the development of data-intensive applications, providing information logistics tools that are able to deliver information and computation resources at the right time, in the right place, and with the right quality. Applications developed with DITAS tools live in a Fog Computing environment, where data move from the cloud to the edge and vice versa to provide secure, reliable, and scalable solutions with excellent performance.
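The "right time, right place" decision the abstract attributes to information logistics can be illustrated as a simple placement rule: a task stays in the cloud by default, but moves to the edge when data sensitivity or the latency budget demands it. This is a hypothetical sketch of the general idea, not DITAS's actual decision logic; the function name, thresholds, and latency figures are all assumptions.

```python
# Illustrative sketch of an information-logistics placement decision:
# prefer the cloud for scalability, but fall back to the edge when privacy
# or latency requirements rule the cloud out. All numbers are hypothetical.

def place_task(latency_budget_ms, data_is_sensitive,
               cloud_latency_ms=80):
    """Return 'edge' or 'cloud' for a data-processing task."""
    if data_is_sensitive:
        return "edge"                    # keep private data near its source
    if cloud_latency_ms > latency_budget_ms:
        return "edge"                    # cloud round-trip exceeds the budget
    return "cloud"                       # scalable default placement

print(place_task(50, False))   # → edge (cloud latency 80 ms > 50 ms budget)
print(place_task(200, False))  # → cloud
print(place_task(200, True))   # → edge
```

In a fog deployment such a rule would be evaluated per request, which is what lets data and computation move between cloud and edge as the abstract describes.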